Qwen3-235B-A22B is the latest-generation large language model in the Qwen series. It adopts a Mixture-of-Experts (MoE) architecture with 235 billion total parameters, of which 22 billion are activated per token. It excels at reasoning, instruction following, agent capabilities, and multilingual support.
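
Below is a minimal sketch of running the model with Hugging Face Transformers. The checkpoint ID `Qwen/Qwen3-235B-A22B` is assumed from the model name; verify it against the published repository, and note that serving a 235B-parameter MoE model requires a multi-GPU setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint ID, inferred from the model name.
model_name = "Qwen/Qwen3-235B-A22B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard weights across available GPUs (requires accelerate)
)

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "Briefly explain Mixture of Experts."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```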